Maximin Analysis of Message Passing Algorithms for Recovering Block Sparse Signals

Authors

  • Armeen Taeb
  • Arian Maleki
  • Christoph Studer
  • Richard G. Baraniuk
Abstract

We consider the problem of recovering a block (or group) sparse signal from an underdetermined set of random linear measurements, which appear in compressed sensing applications such as radar and imaging. Recent results of Donoho, Johnstone, and Montanari have shown that approximate message passing (AMP) in combination with Stein's shrinkage outperforms group LASSO for large block sizes. In this paper, we prove that for a fixed block size and in the strong undersampling regime (i.e., having very few measurements compared to the ambient dimension), AMP cannot improve upon group LASSO, thereby complementing the results of Donoho et al.

I. PROBLEM STATEMENT

We analyze the recovery of a block (or group) sparse signal x ∈ R^N with at most k nonzero entries from an underdetermined set of linear measurements y = Ax, where A ∈ R^{n×N} is i.i.d. zero-mean Gaussian with unit variance. We consider the asymptotic setting where δ = n/N and ρ = k/n represent the undersampling and sparsity parameters, respectively, and N, n, k → ∞; we furthermore assume that all blocks are of the same size B. The signal x is partitioned into M blocks with N = MB. In what follows, we will denote a particular block by x_B. Suppose that the elements of x_B are drawn from a distribution F(x_B) = (1 − ε)δ_0(‖x_B‖_2) + ε G(x_B), where ε = ρδ, and δ_0 is the Dirac delta function; G is a probability distribution that is typically unknown in practice. We define the block soft-thresholding function, applied to each block y_B, as follows [1]:

η(y_B; τ) ≜ (y_B / ‖y_B‖_2) max{‖y_B‖_2 − τ, 0}.   (1)

Two popular algorithms for recovering block sparse signals in compressed sensing are group LASSO [2] and approximate message passing (AMP) [3]. Group LASSO searches for the vector x̂ that solves

x̂ ≜ argmin_x { Σ_{B=1}^{M} ‖x_B‖_2 : y = Ax }.

AMP is an iterative algorithm for computing x̂. Concretely, AMP is initialized with x^0 = 0 and z^0 = 0, and iteratively performs the following computations [4]:

x^{t+1} = η(x^t + A^T z^t; τ^t)  and  z^t = y − Ax^t + c^t.

Here, c^t is a correction term that depends on the previous iterations and significantly improves the convergence of AMP; x^t is the sparse estimate at iteration t, and η is a nonlinear function that imposes (block) sparsity.

The performance of compressed sensing recovery algorithms can be characterized accurately by their phase-transition (PT) behavior. Specifically, we define a two-dimensional phase space (δ, ρ) ∈ [0, 1]^2 that is partitioned into two regions, "success" and "failure", separated by the PT curve (δ, ρ(δ)). For the same value of δ, algorithms with a higher PT outperform algorithms with a lower PT, i.e., they guarantee exact recovery for a larger number of nonzero entries k.
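To make the iteration above concrete, the following is a minimal NumPy sketch of the block soft-thresholding operator in (1) and of an AMP loop of the form x^{t+1} = η(x^t + A^T z^t; τ), z^t = y − Ax^t + c^t. It assumes a fixed threshold τ and takes c^t to be the customary Onsager-style correction built from the divergence of η (the text above only denotes the correction abstractly by c); the function names and these choices are illustrative, not taken from the paper.

import numpy as np

def block_soft_threshold(v, tau, B):
    """Block soft-thresholding eta(.; tau) of Eq. (1): each length-B block
    of v is shrunk by tau in its l2 norm, and zeroed if its norm is below tau."""
    blocks = v.reshape(-1, B)
    norms = np.linalg.norm(blocks, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return (blocks * scale).reshape(-1)

def block_soft_threshold_divergence(v, tau, B):
    """Divergence (trace of the Jacobian) of block soft-thresholding at v.
    For an active block with norm r > tau the contribution is B - (B-1)*tau/r."""
    norms = np.linalg.norm(v.reshape(-1, B), axis=1)
    active = norms > tau
    return np.sum(B - (B - 1) * tau / norms[active])

def block_amp(y, A, tau, B, num_iters=100):
    """Sketch of block AMP with a fixed threshold tau (an illustrative choice).
    The correction c^t is the standard Onsager term (z^{t-1}/n) * div(eta)."""
    n, N = A.shape
    x = np.zeros(N)
    z = y.copy()
    div = 0.0
    for _ in range(num_iters):
        z = y - A @ x + (div / n) * z        # residual z^t including the correction c^t
        r = x + A.T @ z                      # pseudo-data passed to the shrinkage
        x = block_soft_threshold(r, tau, B)  # blockwise shrinkage, Eq. (1)
        div = block_soft_threshold_divergence(r, tau, B)  # stored for the next correction
    return x

With A drawn i.i.d. Gaussian as in the problem statement, block_amp(y, A, tau, B) returns a block-sparse estimate of x; in practice the threshold τ would be tuned, possibly per iteration.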


Similar articles

Empirical-Bayes Approaches to Recovery of Structured Sparse Signals via Approximate Message Passing

In recent years, there have been massive increases in both the dimensionality and sample sizes of data due to ever-increasing consumer demand coupled with relatively inexpensive sensing technologies. These high-dimensional datasets bring challenges such as complexity, along with numerous opportunities. Though many signals of interest live in a high-dimensional ambient space, they of...


How to Design Message Passing Algorithms for Compressed Sensing

Finding fast first order methods for recovering signals from compressed measurements is a problem of interest in applications ranging from biology to imaging. Recently, the authors proposed a class of low-complexity algorithms called approximate message passing or AMP. The new algorithms were shown, through extensive simulations and mathematical analysis, to exhibit very fast convergence rate a...


Approximate Message Passing for Underdetermined Audio Source Separation

Approximate message passing (AMP) algorithms have shown great promise in sparse signal reconstruction due to their low computational requirements and fast convergence to an exact solution. Moreover, they provide a probabilistic framework that is often more intuitive than alternatives such as convex optimisation. In this paper, AMP is used for audio source separation from underdetermined instant...


Codes on Graphs and Analysis of Iterative Algorithms for Reconstructing Sparse Signals and Decoding of Check-Hybrid GLDPC Codes

Introduction · 1. Preliminaries · 1.1. Block Codes · 1.1.1. Linear Block Codes · 1.2. Commun...


MMSE denoising of sparse Lévy processes via message passing

Many recent algorithms for sparse signal recovery can be interpreted as maximum-a-posteriori (MAP) estimators relying on some specific priors. From this Bayesian perspective, state-of-the-art methods based on discrete-gradient regularizers, such as total-variation (TV) minimization, implicitly assume the signals to be sampled instances of Lévy processes with independent Laplace-distributed incr...



Journal: CoRR
Volume: abs/1303.2389
Issue: –
Pages: –
Publication date: 2013